36 research outputs found

    Dictionary preconditioning for greedy algorithms

    This article presents a modification of greedy algorithms such as thresholding and (Orthogonal) Matching Pursuit that improves their performance in finding sparse signal representations in redundant dictionaries. These algorithms can be split into a sensing step and a reconstruction step, and the former will fail to identify the correct atoms if the cumulative coherence of the dictionary is too high. We therefore modify the sensing step by introducing a special sensing matrix, also referred to as a measurement ensemble.
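    The sensing/reconstruction split described above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the function name is invented, and plain thresholding is shown with the dictionary itself as the sensing matrix, which is exactly the choice the preconditioning would replace.

    ```python
    import numpy as np

    def thresholding_sense(signal, sensing, k):
        """Sensing step: correlate the sensing matrix with the signal and
        keep the indices of the k largest correlations (candidate atoms)."""
        correlations = sensing.T @ signal
        return np.sort(np.argsort(np.abs(correlations))[-k:])

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)       # unit-norm atoms of a redundant dictionary
    s = D[:, [3, 40, 97]] @ np.array([2.0, -1.5, 1.0])

    # Plain thresholding senses with D itself; a preconditioned scheme would
    # pass a modified measurement ensemble in place of D below.
    support = thresholding_sense(s, D, 3)
    ```

    The reconstruction step (e.g. a least-squares fit on the selected atoms) is unchanged; only the correlation stage is swapped out.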

    Atoms of all channels, unite! Average case analysis of multi-channel sparse recovery using greedy algorithms

    This paper provides new results on computing simultaneous sparse approximations of multichannel signals over redundant dictionaries using two greedy algorithms. The first one, p-thresholding, selects the S atoms that have the largest p-correlation, while the second one, p-simultaneous orthogonal matching pursuit (p-SOMP), is a generalisation of an algorithm studied by Tropp. We first provide exact recovery conditions as well as worst-case analyses of both algorithms. The results, expressed using the standard cumulative coherence, are very reminiscent of the single-channel case and, in particular, impose stringent restrictions on the dictionary. We unlock the situation by performing an average case analysis of both algorithms. First, we set up a general probabilistic signal model in which the coefficients of the atoms are drawn at random from the standard Gaussian distribution. Second, we show that under this model, and with mild conditions on the coherence, the probability that p-thresholding and p-SOMP fail to recover the correct components is overwhelmingly small and gets smaller as the number of channels increases. Furthermore, we analyse the influence of selecting the set of correct atoms at random. We show that, if the dictionary satisfies a uniform uncertainty principle, the probability that simultaneous OMP fails to recover any sufficiently sparse set of atoms becomes increasingly small as the number of channels increases.
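    p-thresholding as described above can be sketched in a few lines under the stated signal model (a shared support with Gaussian coefficients per channel). The dictionary size, channel count, and seed below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def p_thresholding(Y, D, S, p=2):
        """Select the S atoms whose correlations across channels have the
        largest l_p norm (sketch of p-thresholding)."""
        C = D.T @ Y                                  # (atoms x channels) correlations
        scores = np.linalg.norm(C, ord=p, axis=1)    # l_p norm over channels
        return np.sort(np.argsort(scores)[-S:])

    rng = np.random.default_rng(1)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
    X = np.zeros((256, 8))                           # 8 channels, shared support
    X[[10, 120, 200]] = rng.standard_normal((3, 8))  # Gaussian coefficients, as in the model
    Y = D @ X
    recovered = p_thresholding(Y, D, 3)
    ```

    The average-case result says the score of a correct atom concentrates as channels accumulate, so runs like this succeed with probability approaching one as the channel count grows; any single run can still fail.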

    Generalization Error in Deep Learning

    Deep learning models have lately shown great performance in fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, the source of their generalization ability remains largely unclear. An important question is therefore what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for characterizing the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.

    Dietary Supplements and Sports Performance: Introduction and Vitamins

    Sports success depends primarily on genetic endowment in athletes with morphologic, psychologic, physiologic, and metabolic traits specific to performance characteristics vital to their sport. Such genetically endowed athletes must also receive optimal training to increase physical power, enhance mental strength, and provide a mechanical advantage. However, athletes often attempt to go beyond training and use substances and techniques, often referred to as ergogenics, in attempts to gain a competitive advantage. Pharmacological agents, such as anabolic steroids and amphetamines, have been used in the past, but such practices by athletes have led to the establishment of anti-doping legislation and effective testing protocols to help deter their use. Thus, many athletes have turned to various dietary strategies, including the use of dietary supplements (sports supplements), which they presume to be effective, safe, and legal.

    Distributed sensing of noisy signals by thresholding of redundant expansions

    This paper addresses the problem of sensing, or recovering, a signal s captured by distributed low-complexity sensors. Each sensor observes a noisy version of the signal of interest and independently forms an approximant of its observation. This approximant is sent to a central decoder that tries to recover the input signal by combining the multiple sensor outputs. We propose to use redundant dictionaries, and thresholding in the sensor nodes, in order to form sparse approximants of the noisy observations with low computational complexity. We first show that the noise can actually be beneficial in recovering the correct components of the signal s, since it can advantageously perturb the naive thresholding scheme. We then illustrate the benefit of multiple observations with uncorrelated noise. With careful reconstruction using a POCS strategy, each additional measurement helps to recover more components of the original signal, since it tends to isolate the part common to all observations. Experimental results demonstrate the recovery performance of our distributed sensing system: a few observations, each represented by a small number of components, are able to provide a good approximation of the signal, even in very noisy conditions.
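    The threshold-then-combine pipeline above can be sketched as follows. The decoder here alternates projections onto the affine set consistent with each sensor's sparse approximant, which is one plausible reading of a POCS strategy, not the paper's exact algorithm; all sizes, noise levels, and names are assumptions.

    ```python
    import numpy as np

    def pocs_recover(supports, approximants, D, iters=50):
        """POCS-style decoder (sketch): alternately project the running estimate
        onto the affine set consistent with each sensor's approximant."""
        x = np.mean(approximants, axis=0)            # start from the average
        for _ in range(iters):
            for T, a in zip(supports, approximants):
                P = D[:, T] @ np.linalg.pinv(D[:, T])   # projector onto span of atoms T
                x = x + P @ (a - x)                     # projection onto {x : P x = P a}
        return x

    rng = np.random.default_rng(3)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)
    s = D[:, [5, 50, 100]] @ np.array([2.0, -1.5, 1.0])

    supports, approximants = [], []
    for _ in range(4):                               # four sensors, independent noise
        y = s + 0.2 * rng.standard_normal(64)
        T = np.sort(np.argsort(np.abs(D.T @ y))[-5:])   # thresholding in the sensor
        P = D[:, T] @ np.linalg.pinv(D[:, T])
        supports.append(T)
        approximants.append(P @ y)                   # low-complexity sparse approximant
    x_hat = pocs_recover(supports, approximants, D)
    ```

    Because each sensor's noise perturbs thresholding differently, the selected supports differ, and the alternating projections tend to retain the components common to all observations.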

    A Convergent Incoherent Dictionary Learning Algorithm for Sparse Coding

    Recently, sparse coding has been widely used in many applications ranging from image recovery to pattern recognition. The low mutual coherence of a dictionary is an important property that ensures the optimality of the sparse codes generated from this dictionary. Indeed, most existing dictionary learning methods for sparse coding either implicitly or explicitly try to learn an incoherent dictionary, which requires solving a very challenging non-convex optimization problem. In this paper, we propose a hybrid alternating proximal algorithm for incoherent dictionary learning and establish its global convergence property. Such a convergent incoherent dictionary learning method is not only of theoretical interest, but might also benefit many sparse-coding-based applications.
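    To picture what an incoherence-promoting step does, here is a simple Gram-clipping heuristic: clip off-diagonal Gram entries to a target coherence and map the result back to a dictionary via its spectral factorization. This is an assumption for illustration only, not the paper's proximal operator, and it carries no convergence guarantee.

    ```python
    import numpy as np

    def mutual_coherence(D):
        """Largest absolute inner product between distinct unit-norm atoms."""
        G = np.abs(D.T @ D)
        np.fill_diagonal(G, 0.0)
        return G.max()

    def decorrelate_step(D, mu_target=0.5):
        """One heuristic incoherence step: clip the off-diagonal Gram entries
        to mu_target, then refactor into a d x n dictionary."""
        d, n = D.shape
        G = D.T @ D
        off = G - np.diag(np.diag(G))
        Gc = np.clip(off, -mu_target, mu_target) + np.eye(n)
        w, V = np.linalg.eigh(Gc)                    # ascending eigenvalues
        idx = np.argsort(w)[-d:]                     # keep the top-d components
        A = np.sqrt(np.clip(w[idx], 0, None))[:, None] * V[:, idx].T  # d x n factor
        return A / np.linalg.norm(A, axis=0)         # re-normalize the atoms

    rng = np.random.default_rng(2)
    D = rng.standard_normal((20, 40))
    D /= np.linalg.norm(D, axis=0)
    D2 = decorrelate_step(D)
    ```

    A convergent method such as the paper's would alternate steps like this with sparse-coding and dictionary-update steps inside a proximal framework, rather than applying the projection in isolation.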